    Using SMT Solving for the Lookup of Infeasible Paths in Binary Programs

    Worst-Case Execution Time (WCET) is a key component in checking the temporal constraints of real-time systems. WCET estimation by static analysis provides a safe upper bound. While hardware modelling is now efficient, the loss of precision stems mainly from the inclusion of infeasible execution paths in the WCET calculation. This paper proposes a new method to detect such paths, based on static analysis of machine code and feasibility tests of path conditions using Satisfiability Modulo Theory (SMT) solvers. The experiments show promising results, although the expected precision was slightly lowered by the clamping operations needed to cope with complexity explosion. An important point is that the implementation has been performed in the OTAWA framework and is independent of any instruction set thanks to its semantic instructions.
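
    As an illustration of the feasibility test at the heart of this method (a minimal sketch under assumptions, not the paper's OTAWA-based implementation), the Python fragment below uses the Z3 SMT solver to check whether the branch conditions collected along a candidate path are jointly satisfiable; an unsat answer proves the path infeasible. The 32-bit encoding and the example conditions are invented for illustration.

    # Minimal sketch of SMT-based infeasible-path detection (illustrative only).
    # Requires: pip install z3-solver
    from z3 import BitVec, Solver, unsat

    # A machine register along the path, modelled as a 32-bit word (assumption).
    x = BitVec("x", 32)

    # Branch conditions gathered along one candidate path: an early branch takes
    # the "x > 10" side, a later branch takes the "x < 5" side.
    path_conditions = [x > 10, x < 5]

    solver = Solver()
    solver.add(*path_conditions)

    if solver.check() == unsat:
        print("Path is infeasible: its branch conditions are contradictory.")
    else:
        print("Path may be feasible, e.g.:", solver.model())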

    Determining Data-Flow Properties to Improve Worst-Case Execution Time Estimates

    The search for an upper bound on the execution time of a program is an essential part of the verification of real-time critical systems. The execution times of the programs of such systems generally vary a lot, and it is difficult, or impossible, to predict the range of possible times. Instead, it is better to look for an approximation of the Worst-Case Execution Time (WCET). A crucial requirement of this estimate is that it must be safe, that is, it must be guaranteed to lie above the real WCET. Because we are looking to prove that the system in question terminates reasonably quickly, an overapproximation is the only acceptable form of approximation. Such a safety property cannot reasonably be guaranteed without static analysis, as a result based on a battery of tests cannot be safe without an exhaustive handling of the execution cases. Furthermore, in the absence of a certified compiler (and a technique for the safe transfer of properties to the binaries), the extraction of properties must be done directly on the binary code to guarantee their soundness. However, this approximation comes at a cost: a large pessimism, the gap between the estimated WCET and the real WCET, leads to superfluous extra hardware costs for the system to respect the imposed timing requirements. It is therefore important to improve the precision of the WCET estimate, while maintaining its safety, by reducing this gap until it is low enough not to entail immoderate costs. A major cause of overestimation is the inclusion of semantically impossible paths, known as infeasible paths, in the WCET computation. This is due to the use of the Implicit Path Enumeration Technique (IPET), which works on a superset of the possible execution paths. When the Worst-Case Execution Path (WCEP) corresponding to the estimated WCET is infeasible, the precision of that estimate is negatively affected. To address this loss of precision, this thesis proposes an infeasible-path detection technique that improves the precision of static analyses (notably WCET estimation) by informing them of the infeasibility of certain paths of the program.
    This information is passed as data flow properties, formatted in the portable FFX annotation language, allowing the results of our infeasible-path analysis to be communicated to other analyses. The methods presented here are included in the OTAWA framework, developed in the TRACES team at the IRIT lab. They themselves make use of approximations to represent the possible states of the machine at various program points. These approximations are abstractions maintained throughout the analysis, whose validity is ensured by tools from abstract interpretation theory. They enable us to represent, in an efficient yet safe way, the set of states for a family of execution paths up to a given program point, and to detect program points associated with an empty set of possible states, signalling one (or several) infeasible path(s). The detection of such cases, the end goal of the developed analysis, is made possible by the use of Satisfiability Modulo Theory (SMT) solvers. These solvers determine the satisfiability of a set of constraints deduced from the abstract states. If a set of constraints, derived from a conjunction of predicates, is unsatisfiable, then no valuation of the machine variables matches a possible execution case, and the associated family of paths is therefore infeasible. The efficiency of this technique is demonstrated by a series of experiments on various benchmark suites, some widely recognized in the static WCET domain, others derived from actual industrial applications. Heuristics are set up to soften the complexity of the analysis, especially for the larger applications. The detected infeasible paths are injected as linear flow constraints into the Integer Linear Programming (ILP) system that drives the final WCET computation in OTAWA. Depending on the analysed program, this can result in a reduction of the estimated WCET, thereby improving its precision.
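
    To make the final step concrete, here is a hypothetical sketch of an IPET-style Integer Linear Program using the PuLP library: the WCET bound is the maximum of per-block execution times weighted by execution counts, subject to structural flow constraints, and a detected infeasible path is excluded with one extra linear constraint. The toy CFG, the block timings, and the infeasibility fact are assumptions for illustration, not taken from the thesis or from OTAWA.

    # Hypothetical IPET computation with an infeasible-path constraint.
    # Requires: pip install pulp
    from pulp import LpProblem, LpMaximize, LpVariable, lpSum, value

    # Execution counts of basic blocks in a toy, acyclic CFG (assumed timings).
    blocks = ["entry", "then1", "else1", "then2", "else2", "exit"]
    cost = {"entry": 5, "then1": 20, "else1": 8, "then2": 30, "else2": 6, "exit": 3}
    x = {b: LpVariable(f"x_{b}", lowBound=0, cat="Integer") for b in blocks}

    prob = LpProblem("wcet_ipet", LpMaximize)
    prob += lpSum(cost[b] * x[b] for b in blocks)               # objective: WCET bound

    prob += x["entry"] == 1                                      # one run of the program
    prob += x["then1"] + x["else1"] == x["entry"]                # flow through branch 1
    prob += x["then2"] + x["else2"] == x["then1"] + x["else1"]   # flow through branch 2
    prob += x["exit"] == x["then2"] + x["else2"]

    # Infeasible-path fact assumed to come from the SMT analysis: the two "then"
    # branches can never both be taken in the same run.
    prob += x["then1"] + x["then2"] <= 1

    prob.solve()
    print("Estimated WCET bound:", value(prob.objective))        # 46 rather than 58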

    Understanding nitrogen transfer dynamics in a small agricultural catchment: Comparison of distributed (TNT2) and semi-distributed (SWAT) modeling approaches

    Coupling a hydrological model and a crop model is an efficient approach to study the impact of the interactions between agricultural practices and catchment physical characteristics on stream water quality. We analyzed the consequences of using different modeling approaches for the processes controlling nitrogen (N) dynamics in a small agricultural catchment monitored for 15 years. Two agro-hydrological models were applied: the fully distributed model TNT2 and the semi-distributed SWAT model. Using the same input dataset, the calibration aimed at reproducing the same annual water and N balances in both models, so that the spatial and temporal variability of the main N processes could be compared. The models simulated different seasonal cycles for soil N. The main processes involved were N mineralization and denitrification. TNT2 simulated marked seasonal variations, with a net increase of mineralization in autumn after a transient immobilization phase due to the burying of straw with a low C:N ratio. SWAT predicted a steady humus mineralization, with an increase when straw is buried and a decrease afterwards. In TNT2, denitrification occurred mainly in autumn, because of the dynamics of N availability in the soil and of the climatic and hydrological conditions. SWAT predicted denitrification in winter, when mineral N is available in the soil layers. The spatial distribution of these two processes differed as well: TNT2 simulated less denitrification in bottom lands and close to ditches, as a result of N transfer dynamics. Both models correctly simulate the global trend and inter-annual variability of N losses in a small agricultural catchment when a sufficient amount of data is available for calibration. However, N processes and their spatial interactions are simulated very differently, in particular soil mineralization and denitrification. The use of such tools for prediction must therefore be considered with care unless a proper calibration and validation of the different N processes is carried out.

    First Sagittarius A* Event Horizon Telescope Results. I. The Shadow of the Supermassive Black Hole in the Center of the Milky Way

    We present the first Event Horizon Telescope (EHT) observations of Sagittarius A* (Sgr A*), the Galactic center source associated with a supermassive black hole. These observations were conducted in 2017 using a global interferometric array of eight telescopes operating at a wavelength of λ = 1.3 mm. The EHT data resolve a compact emission region with intrahour variability. A variety of imaging and modeling analyses all support an image that is dominated by a bright, thick ring with a diameter of 51.8 ± 2.3 μas (68% credible interval). The ring has modest azimuthal brightness asymmetry and a comparatively dim interior. Using a large suite of numerical simulations, we demonstrate that the EHT images of Sgr A* are consistent with the expected appearance of a Kerr black hole with mass ∼4 × 10⁶ M☉, which is inferred to exist at this location based on previous infrared observations of individual stellar orbits, as well as maser proper-motion studies. Our model comparisons disfavor scenarios where the black hole is viewed at high inclination (i > 50°), as well as nonspinning black holes and those with retrograde accretion disks. Our results provide direct evidence for the presence of a supermassive black hole at the center of the Milky Way, and for the first time we connect the predictions from dynamical measurements of stellar orbits on scales of 10³–10⁵ gravitational radii to event-horizon-scale images and variability. Furthermore, a comparison with the EHT results for the supermassive black hole M87* shows consistency with the predictions of general relativity spanning over three orders of magnitude in central mass.

    Electric dipole moments and the search for new physics

    Static electric dipole moments of nondegenerate systems probe mass scales for physics beyond the Standard Model well beyond those reached directly at high-energy colliders. Discrimination between different physics models, however, requires complementary searches in atomic-molecular-and-optical, nuclear, and particle physics. In this report, we discuss the current status and near-future prospects for a compelling suite of such experiments, along with developments needed in the encompassing theoretical framework.

    A Universal Power-law Prescription for Variability from Synthetic Images of Black Hole Accretion Flows

    We present a framework for characterizing the spatiotemporal power spectrum of the variability expected from the horizon-scale emission structure around supermassive black holes, and we apply this framework to a library of general relativistic magnetohydrodynamic (GRMHD) simulations and associated general relativistic ray-traced images relevant for Event Horizon Telescope (EHT) observations of Sgr A*. We find that the variability power spectrum is generically a red-noise process in both the temporal and spatial dimensions, with the peak in power occurring on the longest timescales and largest spatial scales. When both the time-averaged source structure and the spatially integrated light-curve variability are removed, the residual power spectrum exhibits a universal broken power-law behavior. At small spatial frequencies, the residual power spectrum rises as the square of the spatial frequency and is proportional to the variance in the centroid of emission. Beyond some peak in variability power, the residual power spectrum falls as that of the time-averaged source structure, which is similar across simulations; this behavior can be naturally explained if the variability arises from a multiplicative random field that has a steeper high-frequency power-law index than that of the time-averaged source structure. We briefly explore the ability of power spectral variability studies to constrain physical parameters relevant for the GRMHD simulations, which can be scaled to provide predictions for black holes in a range of systems in the optically thin regime. We present specific expectations for the behavior of the M87* and Sgr A* accretion flows as observed by the EHT.
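
    As a rough, hedged illustration of the kind of residual power spectrum described above (not the authors' pipeline; the synthetic cube, sizes, and units are assumptions), the sketch below removes the spatially integrated light curve and the time-averaged structure from a toy image cube, then computes the temporal power spectrum with numpy FFTs; the random-walk input reproduces the reported red-noise character.

    # Illustrative residual variability power spectrum on a synthetic image cube.
    import numpy as np

    rng = np.random.default_rng(0)
    nt, nx = 512, 64                              # frames and image side (assumed)
    cube = 1.0 + 0.1 * np.cumsum(rng.standard_normal((nt, nx, nx)), axis=0) / np.sqrt(nt)

    # Remove the spatially integrated light-curve variability...
    light_curve = cube.sum(axis=(1, 2))
    cube_norm = cube / light_curve[:, None, None]

    # ...and the time-averaged source structure.
    residual = cube_norm - cube_norm.mean(axis=0)

    # Temporal power spectrum, averaged over pixels: power falls with frequency
    # (red noise), peaking on the longest timescales.
    power = np.abs(np.fft.rfft(residual, axis=0)) ** 2
    temporal_ps = power.mean(axis=(1, 2))
    freqs = np.fft.rfftfreq(nt, d=1.0)            # d = frame spacing, arbitrary units
    print(temporal_ps[1:6])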

    Monitoring the Morphology of M87* in 2009–2017 with the Event Horizon Telescope

    The Event Horizon Telescope (EHT) has recently delivered the first resolved images of M87*, the supermassive black hole in the center of the M87 galaxy. These images were produced using 230 GHz observations performed in 2017 April. Additional observations are required to investigate the persistence of the primary image feature, a ring with azimuthal brightness asymmetry, and to quantify the image variability on event horizon scales. To address this need, we analyze M87* data collected with prototype EHT arrays in 2009, 2011, 2012, and 2013. While these observations do not contain enough information to produce images, they are sufficient to constrain simple geometric models. We develop a modeling approach based on the framework utilized for the 2017 EHT data analysis and validate our procedures using synthetic data. Applying the same approach to the observational data sets, we find the M87* morphology in 2009–2017 to be consistent with a persistent asymmetric ring of ~40 μas diameter. The position angle of the peak intensity varies in time. In particular, we find a significant difference between the position angles measured in 2013 and 2017. These variations are in broad agreement with predictions of a subset of general relativistic magnetohydrodynamic simulations. We show that quantifying the variability across multiple observational epochs has the potential to constrain the physical properties of the source, such as the accretion state or the black hole spin.
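
    For intuition about the "simple geometric models" such data can constrain (a minimal sketch under simplifying assumptions; the paper fits richer asymmetric-ring models), the visibility amplitude of an infinitesimally thin ring of diameter d is |V(ρ)| = |J0(π d ρ)|, so even sparse, non-imaging visibility measurements pin down the diameter through the location of the first null.

    # Sketch: visibility amplitude of a thin ring, |V(rho)| = |J0(pi * d * rho)|,
    # with d the ring diameter in radians and rho the baseline length in wavelengths.
    import numpy as np
    from scipy.special import j0

    UAS_TO_RAD = np.pi / (180 * 3600 * 1e6)   # microarcseconds -> radians
    d = 40.0 * UAS_TO_RAD                     # ~40 uas ring, as in the abstract

    # Baselines up to ~8 Glambda, typical of 1.3 mm VLBI (assumed range).
    rho = np.linspace(1e6, 8e9, 1000)
    vis_amp = np.abs(j0(np.pi * d * rho))

    # The first null of J0 is at pi*d*rho ~= 2.4048, so rho ~= 2.4048 / (pi*d).
    first_null = 2.4048 / (np.pi * d)
    print(f"First visibility null near {first_null / 1e9:.2f} Glambda")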

    The Forward Physics Facility at the High-Luminosity LHC
